
    Remote booting in a hostile world: to whom am I speaking? [Computer security]

    Today's networked computer systems are very vulnerable to attack: terminal software, like that used by the X Window System, is frequently passed across a network, and a Trojan horse can easily be inserted while it is in transit. Many other software products, including operating systems, load parts of themselves from a server across a network. Although users may be confident that their workstation is physically secure, some part of the network to which they are attached almost certainly is not. Most proposals that recommend cryptographic means to protect remotely loaded software also eliminate the advantages of remote loading, such as ease of reconfiguration, upgrade distribution, and maintenance. For this reason, they have largely been abandoned before finding their way into commercial products. The article shows that, contrary to intuition, it is no more difficult to protect a workstation that loads its software across an insecure network than to protect a stand-alone workstation. In contrast to prevailing practice, the authors make essential use of a collision-rich hash function to ensure that an exhaustive off-line search by the opponent will produce not one but many candidate passwords. This strategy forces the opponent into an open, on-line guessing attack and offers the user a defensive strategy unavailable in the case of an off-line attack.
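
    The collision-rich idea can be illustrated with a toy sketch (this is not the authors' actual protocol): truncating a strong hash to a few bits guarantees that an exhaustive search turns up many plausible passwords, so an off-line attacker cannot tell which candidate is real without trying them on-line.

```python
import hashlib
import string
from itertools import product

def collision_rich_hash(password: str, bits: int = 16) -> int:
    """A deliberately short (collision-rich) hash: with only 2**bits
    possible digests, many distinct passwords map to each value."""
    digest = hashlib.sha256(password.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

# The verifier stores only the short digest of the real password.
stored = collision_rich_hash("k3q7")

# An exhaustive off-line search over all 4-character lowercase/digit
# passwords yields many colliding candidates, not a unique answer:
# on average about 36**4 / 2**16, i.e. roughly 26 of them.
alphabet = string.ascii_lowercase + string.digits
candidates = ["".join(p) for p in product(alphabet, repeat=4)
              if collision_rich_hash("".join(p)) == stored]
print(len(candidates), "candidate passwords collide with the stored digest")
```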

    User Authentication for the Internet of Things

    Having been talked about under a variety of names for two or three decades, the Internet of Things is finally coming to fruition. What is still missing, though, is a proper security architecture for it. That currently deployed IoT devices are insecure is attested by the plethora of vulnerabilities that are discovered and exploited daily: clearly “features” are a higher priority than “security” in the eyes of purchasers, and therefore of manufacturers. But we are talking here of a more structural problem: not “this device is insecure” but “there is no strategic plan and no accepted blueprint to make IoT devices secure”. We should also bear in mind that if purchasers do not understand security vulnerabilities, or cannot articulate their understanding, then manufacturers are unlikely to address them. In this position paper we do not address IoT security in general: instead we focus specifically on the problem of user authentication, addressing which is a prerequisite of any security architecture insofar as the three crucial security properties of Confidentiality, Integrity and Availability can only be defined in terms of the distinction between authorized and unauthorized users of the system. However, we should not be misled by the word “authorized”; authorized users may misbehave.

    Picoheterotroph (Bacteria and Archaea) biomass distribution in the global ocean

    We compiled a database of 39 766 data points consisting of flow cytometric and microscopical measurements of picoheterotroph abundance, including both Bacteria and Archaea. After gridding with 1° spacing, the database covers 1.3% of the ocean surface. There are data covering all ocean basins and depths, except for the Southern Hemisphere below 350 m or from April until June. The average picoheterotroph biomass is 3.9 ± 3.6 µg C l⁻¹, with a 20-fold decrease between the surface and the deep sea. We estimate a total ocean inventory of about 1.3 × 10²⁹ picoheterotroph cells. Surprisingly, the abundance in the coastal regions is the same as at the same depths in the open ocean. Using an average of published open-ocean measurements for the conversion from abundance to carbon biomass of 9.1 fg cell⁻¹, we calculate a picoheterotroph carbon inventory of about 1.2 Pg C. The main source of uncertainty in this inventory is the conversion factor from abundance to biomass. Picoheterotroph biomass is ~2 times higher in the tropics than in the polar oceans.
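
    A quick back-of-envelope check of the quoted inventory, using only the figures above:

```python
# Verify that 1.3e29 cells at 9.1 fg per cell gives roughly 1.2 Pg C.
cells = 1.3e29        # estimated total picoheterotroph cells in the ocean
fg_per_cell = 9.1     # abundance-to-carbon conversion factor, fg per cell
grams = cells * fg_per_cell * 1e-15   # 1 fg = 1e-15 g
petagrams = grams / 1e15              # 1 Pg = 1e15 g
print(f"{petagrams:.2f} Pg C")        # 1.18, consistent with the quoted ~1.2 Pg C
```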

    Evidence for aggregation and export of cyanobacteria and nano-eukaryotes from the Sargasso Sea euphotic zone

    Pico-plankton and nano-plankton are generally thought to represent a negligible fraction of the total particulate organic carbon (POC) export flux in oligotrophic gyres due to their small size, slow individual sinking rates, and tight grazer control that leads to high rates of recycling in the euphotic zone. Based upon recent inverse modeling and network analysis, however, it has been hypothesized that pico-plankton, including the cyanobacteria Synechococcus and Prochlorococcus, and nano-plankton contribute significantly to POC export, via formation and gravitational settling of aggregates and/or consumption of those aggregates by mesozooplankton, in proportion to their contribution to net primary production. This study presents total suspended particulate (>0.7 μm) and particle size-fractionated (10–20 μm, 20–53 μm, >53 μm) pigment concentrations from within and below the euphotic zone in the oligotrophic subtropical North Atlantic, collected using Niskin bottles and large-volume in situ pumps, respectively. Results show that the indicator pigments for Synechococcus, Prochlorococcus and nano-eukaryotes are (1) found at depths down to 500 m, and (2) essentially constant, relative to the sum of all indicator pigments, across particle size fractions ranging from 10 μm to >53 μm. Based upon the presence of chlorophyll precursor and degradation pigments, and the fact that in situ pumps do not effectively sample fecal pellets, it is concluded that these pigments were redistributed to deeper waters on larger, more rapidly sinking aggregates, likely by gravitational settling and/or convective mixing. Using available pigment and ancillary data from these cruises, these Synechococcus-, Prochlorococcus- and nano-plankton-derived aggregates are estimated to contribute 2–13% (5 ± 4%), 1–20% (5 ± 7%), and 6–43% (23 ± 14%), respectively, of the total sediment trap POC flux measured on the same cruises. Furthermore, nano-eukaryotes contribute equally to POC export and autotrophic biomass, while cyanobacteria contributions to POC export are one-tenth of their contribution to autotrophic biomass. These field observations provide direct evidence that pico- and nano-plankton make a significant contribution to total POC export via formation of aggregates in this oligotrophic ocean gyre. We suggest that aggregate formation and fate should be included in ecosystem models, particularly as oligotrophic regions are hypothesized to expand in areal extent with warming and increased stratification in the future.
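
    As a purely illustrative sketch of how such contributions are formed (every number below is hypothetical, not the cruise data), each estimate is a ratio of group-specific aggregate POC export to the total sediment trap POC flux:

```python
# Hypothetical inputs; the shape of the calculation is the point.
trap_poc_flux = 20.0  # total sediment trap POC flux, mg C m^-2 d^-1
aggregate_poc = {     # pigment-derived aggregate POC export per group
    "Synechococcus": 1.0,
    "Prochlorococcus": 1.0,
    "nano-eukaryotes": 4.6,
}
for group, flux in aggregate_poc.items():
    print(f"{group}: {100 * flux / trap_poc_flux:.0f}% of trap POC flux")
```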

    Why national health research systems matter

    Some of the most outstanding problems in Computer Science (e.g. access to heterogeneous information sources, use of different e-commerce standards, ontology translation, etc.) are often approached through the identification of ontology mappings. Manual mapping generation slows down, or even makes unfeasible, the solution of particular cases of the aforementioned problems via ontology mappings. Some algorithms and formal models for partial tasks of automatic mapping generation have been proposed. However, an integrated system to solve this problem is still missing. In this paper, we present AMON, a platform for automatic ontology mapping generation. First of all, we show the general structure. Then, we describe the current version of the system, including the ontology on which it is based, the similarity measures that it uses, the access to external sources, etc.
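
    The abstract does not spell out AMON's similarity measures, so the following is only a minimal sketch of the general approach (score lexical similarity between term names, keep the best matches above a threshold); all term names are hypothetical.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Lexical similarity between two ontology term names; one of several
    measures a mapping platform might combine."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def propose_mappings(src_terms, dst_terms, threshold=0.75):
    """Greedily pair each source term with its best-scoring target term,
    keeping only pairs above the threshold."""
    mappings = []
    for s in src_terms:
        best = max(dst_terms, key=lambda d: name_similarity(s, d))
        score = name_similarity(s, best)
        if score >= threshold:
            mappings.append((s, best, round(score, 2)))
    return mappings

print(propose_mappings(["PostalAddress", "PhoneNumber"],
                       ["postal_address", "telephone_number", "email"]))
```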

    Conducting value for money analyses for non-randomised interventional studies including service evaluations: an educational review with recommendations

    This article provides an educational review covering the considerations involved in conducting ‘value for money’ analyses as part of non-randomised study designs, including service evaluations. These evaluations are a vehicle for producing evidence, such as the value for money of a care intervention or service delivery model. Decision makers, including charities and local and national governing bodies, often rely on evidence from non-randomised data and service evaluations to inform their resource allocation decisions. However, as randomised data obtained from randomised controlled trials are considered the ‘gold standard’ for assessing causation, the use of this alternative vehicle for producing an evidence base requires careful consideration. We refer to value for money analyses, but reflect on methods associated with economic evaluations as a form of analysis used to inform resource allocation decision-making within a finite budget. Not all forms of value for money analysis are considered full economic evaluations, with implications for the information provided to decision makers. The type of value for money analysis to be conducted requires consideration of the outcome(s) of interest, the study design, the statistical methods used to control for confounding and bias, and how to quantify and describe uncertainty and opportunity costs to decision makers in any resulting value for money estimates. Service evaluations as vehicles for producing evidence present different challenges to analysts than those commonly associated with research, randomised controlled trials and health technology appraisals, and they require specific study design and analytic considerations. This educational review describes and discusses these considerations, as overlooking them could affect the information provided to decision makers, who may then make an ill-informed decision based on poor or inaccurate information, with long-term implications. We make direct comparisons between randomised controlled trials and non-randomised data as vehicles for assessing causation, given that even ‘gold standard’ randomised controlled trials have limitations. Although we use UK-based decision makers as examples, we reflect on the needs of decision makers internationally for evidence-based decision-making specific to resource allocation. We make recommendations based on the experiences of the authors in the UK, reflecting on the wide variety of methods documented in the empirical literature. These methods may not have been fully considered relevant to non-randomised study designs and/or service evaluations, but could improve the analyses conducted to inform the relevant value for money decision problem.
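
    The article does not prescribe a single measure, but one summary statistic that full economic evaluations commonly report is the incremental cost-effectiveness ratio (ICER); a minimal sketch with hypothetical numbers:

```python
# ICER = incremental cost / incremental effect; hypothetical inputs.
cost_new, cost_usual = 12_000.0, 9_500.0   # mean cost per patient (GBP)
qaly_new, qaly_usual = 1.45, 1.30          # mean QALYs per patient

icer = (cost_new - cost_usual) / (qaly_new - qaly_usual)
print(f"ICER = {icer:,.0f} GBP per QALY gained")  # 16,667 GBP per QALY
# A decision maker compares this against a willingness-to-pay threshold,
# e.g. the roughly 20,000-30,000 GBP per QALY range used by NICE in the UK.
```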

    Managing human factors in retrofit projects


    A panel model for predicting the diversity of internal temperatures from English dwellings

    Using panel methods, a model for predicting daily mean internal temperature demand across a heterogeneous domestic building stock is developed. The model offers an important link connecting building stock models to human behaviour. It represents the first time a panel model has been used to estimate the dynamics of internal temperature demand from the natural daily fluctuations of external temperature combined with important behavioural, socio-demographic and building efficiency variables. The model is able to predict internal temperatures across a heterogeneous building stock to within ~0.71°C at 95% confidence and to explain 45% of the variance in internal temperature between dwellings. The model confirms hypotheses from sociology and psychology that habitual behaviours are important drivers of home energy consumption. In addition, the model offers the possibility of quantifying take-back (the direct rebound effect) owing to increased internal temperatures following the installation of energy efficiency measures. The presence of thermostats or thermostatic radiator valves (TRVs) is shown to reduce average internal temperatures, whereas the use of an automatic timer is statistically insignificant. The number of occupants, household income and occupant age are all important factors that explain a proportion of internal temperature demand. Households with children or retired occupants are shown to have higher average internal temperatures than households without. As expected, building typology, building age, roof insulation thickness, wall U-value and the proportion of double glazing all have positive and statistically significant effects on daily mean internal temperature. In summary, the model can be used as a tool for predicting internal temperatures or for making statistical inferences. Its primary contribution, however, is the ability to calibrate existing building stock models to account for behavioural and socio-demographic effects, making it possible to back out more accurate predictions of domestic energy demand.
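
    The abstract does not give the model's exact specification; the sketch below fits a random-intercept panel model in the same spirit on synthetic data, with hypothetical variable names.

```python
# Illustrative random-intercept panel model of daily mean internal
# temperature; synthetic data, not the authors' specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_dwellings, n_days = 50, 90
dwelling = np.repeat(np.arange(n_dwellings), n_days)
t_ext = rng.normal(8, 5, n_dwellings * n_days)                # external temp
has_trv = np.repeat(rng.integers(0, 2, n_dwellings), n_days)  # TRV fitted?

# Synthetic response: internal temperature tracks external temperature,
# and dwellings with TRVs run slightly cooler on average.
t_int = 19 + 0.15 * t_ext - 0.5 * has_trv + rng.normal(0, 0.7, len(t_ext))
df = pd.DataFrame({"dwelling": dwelling, "t_ext": t_ext,
                   "has_trv": has_trv, "t_int": t_int})

# A random intercept per dwelling absorbs unobserved household effects
# while still permitting time-invariant covariates such as has_trv.
result = smf.mixedlm("t_int ~ t_ext + has_trv", df, groups=df["dwelling"]).fit()
print(result.summary())
```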

    Evidence-informed health policy: are we beginning to get there at last?
